
    From Data Fusion to Knowledge Fusion

    The task of data fusion is to identify the true values of data items (e.g., the true date of birth for Tom Cruise) among multiple observed values drawn from different sources (e.g., Web sites) of varying and unknown reliability. A recent survey [LDL+12] provides a detailed comparison of various fusion methods on Deep Web data. In this paper, we study the applicability and limitations of different fusion techniques on a more challenging problem: knowledge fusion. Knowledge fusion identifies true subject-predicate-object triples extracted by multiple information extractors from multiple information sources. These extractors perform the tasks of entity linkage and schema alignment, thus introducing an additional source of noise that is quite different from the one traditionally considered in the data fusion literature, which focuses only on factual errors in the original sources. We adapt state-of-the-art data fusion techniques and apply them to a knowledge base with 1.6B unique knowledge triples extracted by 12 extractors from over 1B Web pages, which is three orders of magnitude larger than the data sets used in previous data fusion papers. We show that data fusion approaches hold great promise for solving the knowledge fusion problem, and we suggest interesting research directions through a detailed error analysis of the methods.
    Comment: VLDB'2014
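    To make the data fusion setting concrete, here is a minimal Python sketch of accuracy-weighted voting in the spirit of the iterative methods compared in [LDL+12]; it is not the method of this paper. The function name fuse, the 0.8 accuracy prior, and the fixed iteration count are illustrative assumptions.

```python
from collections import defaultdict

def fuse(claims, iterations=10):
    """Guess the true value of each data item from conflicting claims.

    claims: {data_item: [(source, value), ...]}, e.g. birth dates for
    Tom Cruise observed on different Web sites. Source accuracies and
    value beliefs are re-estimated in alternation.
    """
    sources = {s for obs in claims.values() for s, _ in obs}
    accuracy = {s: 0.8 for s in sources}  # assumed optimistic prior

    for _ in range(iterations):
        # Score each candidate value by the accuracy of the sources voting for it.
        beliefs = {}
        for item, obs in claims.items():
            score = defaultdict(float)
            for source, value in obs:
                score[value] += accuracy[source]
            total = sum(score.values())
            beliefs[item] = {v: s / total for v, s in score.items()}

        # A source's new accuracy is the mean belief in the values it claimed.
        claimed = defaultdict(list)
        for item, obs in claims.items():
            for source, value in obs:
                claimed[source].append(beliefs[item][value])
        accuracy = {s: sum(b) / len(b) for s, b in claimed.items()}

    return {item: max(b, key=b.get) for item, b in beliefs.items()}


claims = {
    "TomCruise.birth_date": [("siteA", "1962-07-03"),
                             ("siteB", "1962-07-03"),
                             ("siteC", "1961-07-03")],
}
print(fuse(claims))  # -> {'TomCruise.birth_date': '1962-07-03'}
```

    In the knowledge fusion setting described above, each (source, value) pair would instead be an (extractor, source, triple) observation, so the reliability estimates must absorb extraction noise as well as factual errors in the sources.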

    Automatic generation of shape models using nonrigid registration with a single segmented template mesh

    Statistical shape modeling using point distribution models (PDMs) has been studied extensively for segmentation and other image analysis tasks. Methods investigated in the literature begin with a set of segmented training images and attempt to find point correspondences between the segmented shapes before performing the statistical analysis. This requires a time-consuming preprocessing stage in which each shape must be manually or semi-automatically segmented by an expert. In this paper, we present a method for PDM generation that requires only one shape to be segmented prior to the training phase. The mesh representation generated from the single template shape is then propagated to the other training shapes through a nonrigid registration process, automatically producing a set of meshes with correspondences between them. The resulting meshes are combined through Procrustes analysis and principal component analysis into a statistical model. A model of the C7 vertebra was created and evaluated for accuracy and compactness.
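    As a rough illustration of the final modeling stage, the NumPy sketch below builds a PDM, assuming the nonrigid registration step has already produced meshes whose i-th vertices correspond across shapes. The function names (build_pdm, procrustes_align, synthesize), the single-pass alignment to the first shape (a simplification of iterative generalized Procrustes analysis), and the 95% variance cutoff are illustrative assumptions, not details from the paper.

```python
import numpy as np

def procrustes_align(shape, reference):
    """Similarity-align one (n_points, 3) shape to a reference:
    remove translation and scale, then apply the optimal rotation."""
    a = shape - shape.mean(axis=0)
    b = reference - reference.mean(axis=0)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    u, _, vt = np.linalg.svd(a.T @ b)
    if np.linalg.det(u @ vt) < 0:   # force a proper rotation, no reflection
        u[:, -1] *= -1
    return a @ (u @ vt)

def build_pdm(shapes, variance_kept=0.95):
    """Build a point distribution model from corresponded shapes.

    shapes: (n_shapes, n_points, 3); vertex i of every mesh is assumed
    to be the same anatomical point, i.e. the correspondences produced
    by propagating the template mesh.
    """
    aligned = np.array([procrustes_align(s, shapes[0]) for s in shapes])
    flat = aligned.reshape(len(shapes), -1)        # (n_shapes, 3*n_points)
    mean = flat.mean(axis=0)
    # PCA on the centered shape vectors via SVD.
    _, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
    var = s**2 / (len(shapes) - 1)                 # variance of each mode
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept)) + 1
    return mean, vt[:k], var[:k]                   # mean shape, modes, variances

def synthesize(mean, modes, variances, coeffs):
    """New shape = mean + sum_i coeffs[i] * sqrt(var_i) * mode_i."""
    offset = (np.asarray(coeffs) * np.sqrt(variances)) @ modes
    return (mean + offset).reshape(-1, 3)
```

    New shape instances can then be synthesized by varying a handful of mode coefficients, which is what makes such a model compact and useful for segmentation.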